fix(openai): Attach response model with streamed Completions API #5557
Conversation
Semver Impact of This PR: 🟢 Patch (bug fixes)

📋 Changelog Preview — this is how your changes will appear in the changelog.

New Features ✨
Bug Fixes 🐛: Openai
Other
Documentation 📚
Internal Changes 🔧: Agents, Openai, Openai Agents, Other

🤖 This preview updates automatically when you update the PR.
Codecov Results 📊: ✅ 1772 passed | ⏭️ 166 skipped | Total: 1938 | Pass Rate: 91.43% | Execution Time: 3m 25s

All tests are passing successfully. ❌ Patch coverage is 0.00%. Project has 11731 uncovered lines. Files with missing lines (1).

Generated by Codecov Action
    nonlocal ttft
    count_tokens_manually = True
    for x in old_iterator:
        span.set_data(SPANDATA.GEN_AI_RESPONSE_MODEL, x.model)
Unguarded streamed model access can raise
Medium Severity
x.model is read directly in the streaming iterators before capture_internal_exceptions() is entered. If a streamed chunk does not expose model (or is a dict-like/event variant), the attribute access raises and interrupts iteration, so the instrumentation breaks the caller’s stream instead of failing safely.
Additional Locations (1)
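The guard the review asks for could be sketched as below. This is an illustrative suggestion, not the PR's actual fix: the helper name, the stub span, and the written-out constant (mirroring SPANDATA.GEN_AI_RESPONSE_MODEL) are assumptions so the sketch stands alone.

```python
# Hypothetical sketch: record the streamed chunk's model defensively.
# The constant mirrors SPANDATA.GEN_AI_RESPONSE_MODEL from the diff.
GEN_AI_RESPONSE_MODEL = "gen_ai.response.model"


class StubSpan:
    """Stand-in for the SDK span; only set_data() matters here."""

    def __init__(self):
        self.data = {}

    def set_data(self, key, value):
        self.data[key] = value


def record_response_model(span, chunk):
    # getattr() guards chunks (or dict-like event variants) that do not
    # expose `.model`, so instrumentation cannot break the caller's stream.
    model = getattr(chunk, "model", None)
    if model is not None:
        span.set_data(GEN_AI_RESPONSE_MODEL, model)


class Chunk:
    """Minimal fake streamed chunk for demonstration."""

    def __init__(self, model=None):
        if model is not None:
            self.model = model


span = StubSpan()
record_response_model(span, Chunk("gpt-4o-mini"))
record_response_model(span, Chunk())  # no .model attribute: no exception
print(span.data)  # {'gen_ai.response.model': 'gpt-4o-mini'}
```

Wrapping the access in capture_internal_exceptions() would achieve the same fail-safe behavior in the SDK's own idiom.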
Spec says it's required: developers.openai.com/api/reference/resources/chat/subresources/completions/streaming-events
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.
Description
Issues
Reminders
Run tox -e linters; use conventional commit prefixes (feat:, fix:, ref:, meta:)